6 research outputs found

    Intelligent machine for ontological representation of massive pedagogical knowledge based on neural networks

    Higher education is increasingly integrating free learning management systems (LMS). The main objective of integrating such systems is to automate online educational processes for the benefit of all the actors who use them. These processes are developed by implementing learning scenarios similar to those of traditional learning systems. LMS produce big data traces emerging from actors’ interactions in online learning. However, adequate instruments for representing the knowledge extracted from these big traces are lacking. In this context, the research at hand aims to transform the big data produced through interactions into big knowledge that can be used in MOOCs by actors at a given learning level within a given learning domain, be it formal or informal. To achieve this objective, we adopt ontological approaches, namely mapping, learning, and enrichment, together with artificial intelligence approaches relevant to our research context. In this paper, we propose three interconnected algorithms for a better ontological representation of learning actors’ knowledge, relying heavily on artificial intelligence approaches throughout the stages of this work. To verify the validity of our contribution, we implement an experiment on an example set of knowledge sources.
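
    As a rough, hedged illustration of the mapping idea described above (the paper's own three algorithms are not reproduced here), the following Python sketch uses a small neural network to assign textual learning traces to ontology concepts. The example traces, the concept labels, and the scikit-learn setup are assumptions introduced purely for illustration.

# Minimal sketch: map textual learning traces to ontology concepts with a
# small neural network. Concept labels and traces are hypothetical; this is
# not the paper's actual algorithm.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

# Toy learning traces (forum posts, quiz logs, ...) and the ontology concept
# each one is assumed to belong to.
traces = [
    "learner posted a question about linear regression",
    "learner completed the quiz on derivatives",
    "tutor shared a resource on gradient descent",
    "learner asked how to integrate a polynomial",
]
concepts = ["MachineLearning", "Calculus", "MachineLearning", "Calculus"]

# Vectorise the traces and train a small feed-forward network.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(traces)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X, concepts)

# Assign a new trace to its most likely ontology concept.
new_trace = ["learner struggles with the chain rule"]
print(clf.predict(vectorizer.transform(new_trace)))  # e.g. ['Calculus']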

    Intelligent Chatbot-LDA Recommender System

    With the proliferation of distance learning platforms, in particular open-access ones such as Massive Open Online Courses (MOOC), the learner is overwhelmed with data, not all of which serves his interests. In addition, a MOOC offers tools that allow learners to seek information, express their ideas, and participate in discussions in an online forum. This forum is a huge and continually evolving repository of rich data, but exploiting it to find information relevant to the learner is difficult. Similarly, the tutor's task of managing a large number of learners is difficult. To this end, a chatbot able to answer learners' requests in natural language is necessary for the smooth running of a MOOC course. The chatbot plays the role of assistant and guide for both learners and tutors. However, chatbot responses come from a knowledge base, which must be relevant. Extracting knowledge to answer questions is a difficult task because of the number of MOOC participants. Learners' interactions with the MOOC platform generate massive information, particularly in discussion forums where they seek answers to their questions. Identifying and extracting knowledge from online forums requires collaborative interactions between learners. In this article, we propose a new approach that allows a chatbot to answer learners' questions in natural language in a relevant and instantaneous way. Our model is based on the LDA Bayesian statistical method, which is applied to threads posted in the forum and classifies them to provide the learner with a rich semantic response. These threads, extracted from the discussion forum as knowledge, enrich the chatbot's knowledge base. In parallel, we map the extracted knowledge to an ontology in order to provide the learner with pedagogical resources that serve as learning support.
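
    To make the LDA step concrete, here is a minimal, hedged Python sketch: forum threads are modelled as topic mixtures, a learner question is projected into the same topic space, and the closest thread is returned as a candidate answer. The thread texts, the number of topics, and the cosine-similarity retrieval rule are assumptions for illustration, not the paper's exact configuration.

# Minimal sketch of the LDA idea: model forum threads as topic mixtures,
# then answer a learner question with the thread whose topic distribution
# is closest. Thread texts and parameters are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

threads = [
    "how do I submit the assignment before the deadline",
    "error when uploading my assignment file",
    "can someone explain the difference between lists and tuples",
    "what is the grading policy for the final quiz",
]

# Bag-of-words representation of the forum threads.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(threads)

# Fit LDA and obtain a topic distribution per thread.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
thread_topics = lda.fit_transform(X)

# Infer the topic distribution of a new learner question and pick the
# most similar thread as the chatbot's candidate answer.
question = ["problem uploading my assignment"]
q_topics = lda.transform(vectorizer.transform(question))
best = cosine_similarity(q_topics, thread_topics).argmax()
print(threads[best])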

    Machine Learning Based On Big Data Extraction of Massive Educational Knowledge

    A learning environment generates massive knowledge through the services provided in MOOCs. Such knowledge is produced via learning actor interactions. This motivates researchers to put forward solutions for exploiting big data, drawing on learning analytics techniques as well as big data techniques applied to the educational field. In this context, the present article presents a uniform model to facilitate the exploitation of the experiences produced by the interactions of the pedagogical actors. The aim of this model is to enable a unified analysis of the massive data generated by learning actors. The model first pre-processes the massive data produced in an e-learning system and then applies machine learning, defined by rules measuring the relevance of actors’ knowledge. All the processing stages of the model are brought together in an algorithm that produces a learning actor knowledge tree.
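
    The pipeline described above (pre-processing, relevance rules, knowledge tree) can be sketched roughly as follows in Python. The record fields, the relevance rule, and the tree layout are assumptions chosen for illustration; they are not the paper's actual measures.

# Minimal sketch of the pipeline: pre-process raw interaction records,
# score them with a simple relevance rule, and group the result into a
# per-actor knowledge tree. Field names, weights and thresholds are
# assumptions, not the paper's rules.
from collections import defaultdict

interactions = [
    {"actor": "learner_1", "domain": "math", "item": "forum_post", "duration": 120},
    {"actor": "learner_1", "domain": "math", "item": "quiz", "duration": 300},
    {"actor": "learner_2", "domain": "physics", "item": "video", "duration": 45},
]

def preprocess(record):
    # Drop incomplete records and normalise the item label.
    if not record.get("actor") or not record.get("item"):
        return None
    return {**record, "item": record["item"].strip().lower()}

def relevance(record):
    # Hypothetical rule: longer engagement means more relevant knowledge.
    return min(record["duration"] / 300, 1.0)

# Build the knowledge tree: actor -> domain -> list of (item, score).
tree = defaultdict(lambda: defaultdict(list))
for raw in interactions:
    record = preprocess(raw)
    if record is not None:
        tree[record["actor"]][record["domain"]].append((record["item"], relevance(record)))

for actor, domains in tree.items():
    print(actor, dict(domains))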